9 research outputs found

    Using Phone Sensors and an Artificial Neural Network to Detect Gait Changes During Drinking Episodes in the Natural Environment

    Phone sensors could be useful in assessing changes in gait that occur with alcohol consumption. This study determined (1) the feasibility of collecting gait-related data during drinking occasions in the natural environment, and (2) how gait-related features measured by phone sensors relate to estimated blood alcohol concentration (eBAC). Ten young adult heavy drinkers were prompted to complete a 5-step gait task every hour from 8pm to 12am over four consecutive weekends. We collected 3-axis accelerometer, gyroscope, and magnetometer data from phone sensors, and computed 24 gait-related features using a sliding window technique. eBAC levels were calculated at each time point based on Ecological Momentary Assessment (EMA) of alcohol use. We used an artificial neural network model to analyze associations between sensor features and eBACs in training (70% of the data) and validation and test (30% of the data) datasets. We analyzed 128 data points where both eBAC and gait-related sensor data were captured: while not drinking (n=60), while eBAC was ascending (n=55), and while eBAC was descending (n=13). Twenty-one data points were captured at times when the eBAC exceeded the legal limit (0.08 g/dL). Using a Bayesian regularized neural network, gait-related phone sensor features showed a high correlation with eBAC (Pearson's r > 0.9), and >95% of estimated eBAC values fell within ±0.012 of actual eBAC. It is feasible to collect gait-related data from smartphone sensors during drinking occasions in the natural environment. Sensor-based features can be used to infer gait changes associated with elevated blood alcohol content.
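The sliding-window feature extraction described above can be sketched in Python. The window length, step size, and use of a single sensor axis are illustrative assumptions, not the study's exact parameters; the features (mean, standard deviation, energy) are among those the abstract names.

```python
import numpy as np

def sliding_window_features(signal, window=100, step=50):
    """Compute per-window gait features (mean, std, energy) from a 1-D
    sensor signal, e.g. one accelerometer axis sampled at 100 Hz.
    Returns an array of shape (n_windows, 3)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        # Energy here is mean squared amplitude over the window.
        feats.append([w.mean(), w.std(), np.sum(w ** 2) / window])
    return np.array(feats)
```

In the study, 24 such features were computed across the three accelerometer, gyroscope, and magnetometer axes; this sketch shows the per-axis windowing mechanics only.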

    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Development of alternative positioning approaches and techniques for navigation systems is therefore an ongoing research topic. In this dissertation, I present a new approach and address three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes inertial and visual sensors available on smartphones and is focused on developing: a framework for monocular visual-inertial odometry (VIO) to position a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm to detect obstacles using a sequence of visual data; and a supervised approach to context-aware keyframe detection. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes six-degree-of-freedom (6DoF) pose in an end-to-end fashion, plus an extended Kalman filter module for fine-tuning the scale parameter based on inertial observations and managing errors. I compare the results of my featureless technique with the results of conventional feature-based VIO techniques and with manually-scaled results. The comparison shows that while the framework improves accuracy over other featureless methods, feature-based methods still outperform the proposed approach. The approach for obstacle detection is based on processing two consecutive images to detect obstacles. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, achieving 82% precision compared with 69%.
In order to determine an appropriate frame-rate for keyframe extraction from the video stream, I analyzed movement patterns of the camera and inferred the context of the user to generate a model associating movement anomalies with a suitable frame-extraction rate. The output of this model was used to determine the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that using inertial data to infer the effective frames decreases the number of frames required.
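The idea of mapping camera movement patterns to a keyframe-extraction rate can be illustrated with a minimal sketch. The linear mapping, the specific rate bounds, and the use of accelerometer-magnitude variability as the movement-anomaly signal are all assumptions for illustration, not the dissertation's actual model.

```python
import numpy as np

def keyframe_rate(accel, base_rate=2.0, max_rate=10.0, scale=5.0):
    """Map recent accelerometer variability to a keyframe-extraction
    rate (keyframes/second): calm motion -> few keyframes, fast or
    erratic motion -> more. `accel` is an (n, 3) array of samples.
    All constants here are illustrative placeholders."""
    mag = np.linalg.norm(accel, axis=1)   # per-sample acceleration magnitude
    anomaly = mag.std()                   # simple movement-anomaly proxy
    return float(min(max_rate, base_rate + scale * anomaly))
```

A VO pipeline could call this on each buffered window of inertial samples and extract keyframes at the returned rate, so that static periods produce few redundant frames.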

    An Artificial Neural Network for Movement Pattern Analysis to Estimate Blood Alcohol Content Level

    Impairments in gait occur after alcohol consumption, and, if detected in real-time, could guide the delivery of "just-in-time" injury prevention interventions. We aimed to identify the salient features of gait that could be used for estimating blood alcohol content (BAC) level in a typical drinking environment. We recruited 10 young adults with a history of heavy drinking to test our research app. During four consecutive Fridays and Saturdays, every hour from 8 p.m. to 12 a.m., they were prompted to use the app to report alcohol consumption and complete a 5-step straight-line walking task, during which 3-axis acceleration and angular velocity data were sampled at a frequency of 100 Hz. BAC for each subject was calculated. From the sensor signals, 24 features were calculated using a sliding window technique, including energy, mean, and standard deviation. Using an artificial neural network (ANN), we performed regression analysis to define a model determining the association between gait features and BACs. Part (70%) of the data was used as a training dataset, and the results were tested and validated using the rest of the samples. We evaluated different training algorithms for the neural network, and the results showed that a Bayesian regularization neural network (BRNN) was the most efficient and accurate. Analyses support the use of the tandem gait task paired with our approach to reliably estimate BAC based on gait features. Results from this work could be useful in designing effective prevention interventions to reduce risky behaviors during periods of alcohol consumption.
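The overall regression pipeline (24 gait features, a 70/30 train/test split, and correlation of predictions with BAC) can be sketched as follows. Since Bayesian-regularized neural network training is not a standard routine in common Python libraries, an L2-regularized (ridge) linear regression stands in for the BRNN here, and the data are synthetic; nothing below reproduces the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 128 observations of 24 gait features and a
# BAC-like target (linear signal plus small noise).
X = rng.normal(size=(128, 24))
y = X @ (rng.normal(size=24) * 0.01) + rng.normal(scale=0.001, size=128)

# 70/30 train/test split, mirroring the study design.
n_train = int(0.7 * len(X))
X_tr, X_te = X[:n_train], X[n_train:]
y_tr, y_te = y[:n_train], y[n_train:]

# Ridge regression (closed form): a simple stand-in for the
# Bayesian-regularized neural network used in the paper.
lam = 1e-3
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(24), X_tr.T @ y_tr)

# Pearson correlation between predictions and held-out targets.
pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
```

On real data, the held-out Pearson correlation is the evaluation criterion the paper reports; the synthetic data here exist only to make the split-fit-evaluate mechanics concrete.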
